2024-08-14
SSH is crucial for a server: it is the door through which system management is performed, so it needs to be secured to prevent unauthorized access to the system.
I once configured the ssh service to listen only on a specific address, but the downside is that another network layer (i.e. a Virtual Private Network tunnel) is needed just to reach the ssh service.
What if I am on the road and want to access the server for some quick administration without opening a VPN tunnel? I arrived at this approach knowing that most reverse proxying solutions can proxy raw TCP connections (i.e. layer 4 reverse proxying).
I’ve come to this setup thanks to a HAProxy blog post explaining this method [1]. The haproxy package is needed for this.
The configuration I’ve used sets up haproxy to route tcp connections based on the TLS Server Name Indication (SNI). First, I set up another backend that acts as a TLS termination for ssh on localhost. Here is a snippet of haproxy.cfg with the aforementioned setup.
listen tls_to_ssh
    mode tcp
    bind 127.0.0.1:2222 ssl crt /etc/ssl/private/ssh.pem
    server localhost_ssh 127.0.0.1:22
Of course, the key and certificate need to be present at the specified location. Here is how to generate them.
cd /etc/ssl/private
umask 077
# Here I use a prime256v1 ECDSA key.
openssl ecparam -name prime256v1 -genkey -out ssh.pem.key
# In case an RSA key is preferred, generate it like this instead.
# Use at least 2048 bits for RSA to be secure.
# openssl genrsa -out ssh.pem.key 4096
# Self-signed certificate (the x509 -new option needs OpenSSL 3.x).
openssl x509 -subj '/CN=my_cloud_server_ssh' -days 365 -key ssh.pem.key -signkey ssh.pem.key -new -out ssh.pem
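As a sanity check, the same kind of key and certificate can be generated in a scratch directory and inspected; the subject CN is what clients must later send as the SNI name. This sketch uses `openssl req -x509` instead, which behaves the same on both OpenSSL 1.1.1 and 3.x.

```shell
# Sanity-check sketch in a scratch directory; on the server the files
# live in /etc/ssl/private instead.
dir=$(mktemp -d)
cd "$dir"
umask 077
# prime256v1 ECDSA key, as above
openssl ecparam -name prime256v1 -genkey -out ssh.pem.key
# self-signed certificate via `req -x509` (portable across openssl versions)
openssl req -x509 -new -key ssh.pem.key \
    -subj '/CN=my_cloud_server_ssh' -days 365 -out ssh.pem
# the subject CN must match the SNI name clients will send
openssl x509 -in ssh.pem -noout -subject -enddate
```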
Now, let’s define the frontend part, which listens for tls connections on port 443 and selects the appropriate backend using the SNI. Here is the relevant part of the haproxy configuration, appended after the haproxy default configuration section.
frontend tls_listener
    mode tcp
    bind :::443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    default_backend tls_to_sni_backends

backend tls_to_sni_backends
    mode tcp
    acl web_server req_ssl_sni -i mydeardiary.linkpc.net
    acl ssh_server req_ssl_sni -i my_cloud_server_ssh
    option ssl-hello-chk
    use-server web if web_server
    use-server ssh if ssh_server
    use-server web if !web_server !ssh_server
    server web 127.0.0.1:4433 send-proxy
    server ssh 127.0.0.1:2222

listen tls_to_ssh
    mode tcp
    bind 127.0.0.1:2222 ssl crt /etc/ssl/private/ssh.pem
    server localhost_ssh 127.0.0.1:22
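Before restarting, the configuration can be validated: `haproxy -c` only parses the file and exits non-zero on errors, so a typo never takes the proxy down. The Debian path `/etc/haproxy/haproxy.cfg` is assumed here.

```shell
# Parse-check the configuration without touching the running instance;
# exits non-zero (with error messages) if anything is wrong.
haproxy -c -f /etc/haproxy/haproxy.cfg
```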
Now, make sure that everything is correct and restart the haproxy service.
systemctl restart haproxy.service
With this configuration, connecting to port 443 with the SNI set to my_cloud_server_ssh will be greeted by the OpenSSH banner.
The famous nginx web server can also be used for the same setup. On Debian, libnginx-mod-stream is required for layer 4 proxying (tcp routing). I’ve used this setup since downgrading my vps to the lowest specification, because nginx consumes less memory than haproxy.
Here is the relevant snippet from /etc/nginx/nginx.conf.
stream {
    map $ssl_preread_server_name $name {
        mydeardiary.linkpc.net web;
        my_cloud_server_ssh ssh;
        default web;
    }

    upstream web {
        server 127.0.0.1:4433;
    }

    upstream ssh {
        server 127.0.0.1:2222;
    }

    server {
        listen 127.0.0.1:2222 ssl proxy_protocol;
        ssl_certificate /etc/ssl/private/ssh.pem;
        ssl_certificate_key /etc/ssl/private/ssh.pem.key;
        set_real_ip_from 127.0.0.1;
        proxy_pass 127.0.0.1:22;
    }

    server {
        listen [::]:443;
        listen 443;
        proxy_pass $name;
        ssl_preread on;
        proxy_protocol on;
    }
}
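As with haproxy, nginx can check its configuration before a restart:

```shell
# Parse-check /etc/nginx/nginx.conf (and everything it includes);
# exits non-zero if the stream block has a syntax error.
nginx -t
```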
Restart nginx to apply the setup. Now, nginx will route tls connections on port 443 according to the SNI as mentioned before.
Which one to use is up to whoever follows this guide. The resulting setup is the same: the ssh service is hidden behind the tls port, accessible via the correct SNI name.
From the client, check if SNI routing works.
openssl s_client -quiet -connect $my_server_ip:443 -servername my_cloud_server_ssh
# if ncat is installed, it can be used
ncat --ssl --ssl-servername my_cloud_server_ssh $my_server_ip 443
The response should be as follows when using the openssl command line.
Connecting to $my_server_ip
depth=0 CN=my_cloud_server_ssh
verify error:num=18:self-signed certificate
verify return:1
depth=0 CN=my_cloud_server_ssh
verify return:1
SSH-2.0-OpenSSH_9.2p1 Debian-2+deb12u3
^C
If using ncat, the ssh banner will be displayed without any tls information.
Now, to connect to the ssh server behind tls, just put one of those commands as the ProxyCommand.
ProxyCmd="ncat --ssl --ssl-servername my_cloud_server_ssh $my_server_ip 443"
# if openssl command line is preferred, it can also be used as proxy command
# ProxyCmd="openssl s_client -quiet -connect $my_server_ip:443 -servername my_cloud_server_ssh"
ssh -o ProxyCommand="$ProxyCmd" $my_user@$my_server_ip
For convenience, the ProxyCommand can be added to the relevant server’s snippet in the ~/.ssh/config file. Here is the relevant part of ~/.ssh/config. Just replace each $variable with the correct value.
Host my_cloud_server_behind_tls
    Hostname $my_server_ip
    User $my_user_name
    IdentitiesOnly yes
    IdentityFile ~/.ssh/$my_ssh_key
    ProxyCommand ncat --ssl --ssl-servername my_cloud_server_ssh $my_server_ip 443
    ServerAliveInterval 20
Now, with the above snippet appended to ~/.ssh/config, connecting to the server is as easy as typing this command.
ssh my_cloud_server_behind_tls
After confirming that access to the ssh service behind tls works, it’s time to limit port 22 to localhost and the Tailscale/Wireguard ip ranges.
ufw allow in from 127.0.0.1 to any app OpenSSH
ufw allow in from ::1 to any app OpenSSH
# tailscale cgnat 100.64.0.0/10 ip range
ufw allow in from 100.64.0.0/10 to any app OpenSSH
ufw allow in from $wireguard_ip_range to any app OpenSSH
# after confirming that everything works, drop the public ssh rule
ufw delete allow ssh
# reload the firewall
ufw reload
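A quick look at the rule set confirms the result: port 22 should only accept the localhost and VPN sources, while 443 stays world-reachable. (ufw needs root, so sudo may be required.)

```shell
# List the active rules with indexes; ssh (22) should only appear with
# the localhost/VPN sources, while 443 remains open to the world.
ufw status numbered
```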
Now, the ssh log will be cleaner, since there will be no more attempts by bad bots to reach a publicly open ssh port.
I hope this guide will be useful and have a nice day!